Introduction

Recognising objects and identifying shapes in images is usually an easy task for humans; it is, however, difficult to automate. The field of computer vision is concerned with automating such processes: it aims to extract information from images (or video sequences of images) in order to achieve what the human visual system can. (“Computer Vision” 2020)

Active contour models (also called snakes) are a class of algorithms for finding the boundaries of shapes. These methods formulate the problem as an optimisation process that balances fidelity to the image against the smoothness of the result. (“Snakes” 2011)

In the scope of this project, we will explore a few active contour models with the help of The Numerical Tours of Signal Processing (Peyré 2011).

Snakes method

A snake is a smooth curve, similar to a spline. The idea of a snakes method is to find smooth curves that match the image features by iteratively minimizing the snake’s energy function. (“Active Contour Model” 2020)

Illustration of the snakes model

The energy of a snake is a combination of internal and external energy (“Snakes” 2011):

  • Internal Energy: a metric that measures the curve’s smoothness or regularity.
  • External Energy: a metric for measuring the data fidelity.
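As an illustrative sketch (not the formulation used later in this project), the two terms can be combined for a discrete snake; `internal_energy`, `external_energy` and the toy `potential` below are hypothetical names:

```python
import numpy as np

# A hypothetical discrete snake: p points in the plane, stored as complex numbers.
p = 64
t = np.linspace(0, 2 * np.pi, p, endpoint=False)
snake = np.cos(t) + 1j * np.sin(t)

def internal_energy(gamma):
    """Smoothness term: sum of squared first differences along the curve."""
    d = np.roll(gamma, -1) - gamma
    return np.sum(np.abs(d) ** 2)

def external_energy(gamma, potential):
    """Data-fidelity term: an image potential sampled along the curve."""
    return np.sum(potential(gamma))

# A toy potential that is minimal on the unit circle, so this snake fits it perfectly.
potential = lambda z: (np.abs(z) - 1.0) ** 2

E = internal_energy(snake) + external_energy(snake, potential)
```

Minimizing `E` over the point positions is what the iterative schemes of the following sections do.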

Classification and representation

Curves can be divided into two types: open curves and closed curves.

  • An open curve has two distinct ends and does not form a loop.
  • A closed curve is a curve with no endpoints and which completely encloses an area. (Weisstein n.d.)

A curve has two different representations: parametric or cartesian.

In the parametric form, the points of the curve are expressed as a function of a real variable, conventionally denoted \(t\) and representing time.

For instance, the parametric representation of a circle in \(\mathbb{R}^2\) is given by \[ \gamma: t\mapsto \begin{pmatrix}x_0+r\cos t\\y_0+r\sin t\end{pmatrix} \] where the points of the curve are defined as \(\Gamma = \mathrm{Im}(\gamma) = \left\{\gamma(t), t\in\mathbb{R}\right\}\).
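Assuming NumPy, a curve can be sampled directly from this parametric form; the values of \(x_0\), \(y_0\), \(r\) and \(p\) below are arbitrary:

```python
import numpy as np

# Sample the parametric circle gamma(t) = (x0 + r cos t, y0 + r sin t)
# at p values of t covering [0, 2*pi). All constants are illustrative.
x0, y0, r = 0.5, 0.5, 0.25
p = 100
t = np.linspace(0, 2 * np.pi, p, endpoint=False)
gamma = np.stack([x0 + r * np.cos(t), y0 + r * np.sin(t)], axis=1)  # shape (p, 2)
```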

The cartesian representation is an equation (or a set of equations) that describes the relations between the coordinates of the points of the curve. Such representation can be explicit \(y=f(x)\) or implicit \(f(x,y)=0\).

In the case of a circle, we can express it implicitly by \[ \Gamma = \left\{(x,y)\in\mathbb{R}^2 \,\middle|\, (x-x_0)^2 + (y-y_0)^2 = r^2 \right\} \]

or explicitly, as the union of two branches, \[ \Gamma = \left\{(x,y)\in\mathbb{R}^2 \,\middle|\, y=y_0\pm\sqrt{r^2-(x-x_0)^2} \right\} \]
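As a quick numerical sanity check (with illustrative values), points sampled from the parametric form satisfy the implicit equation and fall on one of the two explicit branches:

```python
import numpy as np

# Sample a circle parametrically; x0, y0, r are illustrative.
x0, y0, r = 0.0, 0.0, 2.0
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x = x0 + r * np.cos(t)
y = y0 + r * np.sin(t)

# Implicit form: (x-x0)^2 + (y-y0)^2 - r^2 should vanish at every sample.
residual = (x - x0) ** 2 + (y - y0) ** 2 - r ** 2

# Explicit form: each point lies on y = y0 + sqrt(...) or y = y0 - sqrt(...).
branch = np.sqrt(np.maximum(r ** 2 - (x - x0) ** 2, 0))
on_upper = np.isclose(y, y0 + branch, atol=1e-9)
on_lower = np.isclose(y, y0 - branch, atol=1e-9)
```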

Generally speaking, the parametric representation is more expressive than cartesian equations. It is also worth noting that tracking a curve’s behaviour in a small neighbourhood of a point is much simpler in parametric form, since the derivative \(\gamma'(t)\) is easy to compute and study.

In the following sections we will study active contours with respect to their curve representation, as described by G. Peyré (2011).

  • Parametric edge-based active contours
  • Implicit edge-based active contours
  • Region-based active contours

Parametric Active Contours

Formulation

In this section we will study an active contour method using a parametric curve mapped into the complex plane, as we only manipulate 2D images.

\[ \gamma : [0,1] \to \mathbb{C} \]

To implement our methods, we consider a discrete piecewise affine curve composed of \(p\) segments; the parametric function can therefore be represented as a vector \(\gamma\in\mathbb{C}^p\).

Let’s initialize a closed curve which is in the discrete case a polygon.

x0 = np.array([.78, .14, .42, .18, .32, .16, .75, .83, .57, .68, .46, .40, .72, .79, .91, .90])
y0 = np.array([.87, .82, .75, .63, .34, .17, .08, .46, .50, .25, .27, .57, .73, .57, .75, .79])
gamma0 = x0 + 1j * y0

It would be useful to have a plot wrapper for our curve format for the rest of this section. We also want to link the last point back to the first, \(\gamma_{p+1}=\gamma_1\).

# close the curve
periodize = lambda gamma: np.concatenate((gamma, [gamma[0]]))

# plot wrapper
def cplot(gamma, s='b', lw=1, show=False): 
    gamma = periodize(gamma)
    _ = plt.plot(gamma.real, gamma.imag, s, linewidth=lw)
    _ = plt.axis('tight')
    _ = plt.axis('equal')
    _ = plt.axis('off')
    if show:
        plt.show()

# plot
cplot(gamma0, 'b.-', show=True)

Let’s define a function for resampling the curve uniformly according to its arc length.

def resample(gamma, p, periodic=True):
    if periodic:
        gamma = periodize(gamma)
    # calculate segments lengths
    d = np.concatenate(([0], np.cumsum(1e-5 + np.abs(gamma[:-1] - gamma[1:]))))
    # interpolate gamma at new points
    return np.interp(np.linspace(0, 1, p, endpoint=False),  d/d[-1], gamma)

Let’s initialize \(\gamma_1\in\mathbb{C}^p\) for \(p=256\).

# resample gamma
gamma1 = resample(gamma0, p=256)
cplot(gamma1, 'k', True)

Define forward and backward finite difference for approximating derivatives.

BwdDiff = lambda c: c - np.roll(c, +1)
FwdDiff = lambda c: np.roll(c, -1) - c

Thanks to these helper functions, we can now define the tangent and the normal of \(\gamma\) at each point. We define the tangent as the normalized vector in the direction of the derivative of \(\gamma\) at a given time. \[ t_{\gamma}(t) = \frac{\gamma'(t)}{\left\lVert\gamma'(t)\right\rVert} \]

The normal at a given time is simply the orthogonal vector to the tangent. \[ n_{\gamma}(t) = t_{\gamma}(t)^{\bot} \]

normalize = lambda v: v / np.maximum(np.abs(v), 1e-10) # disallow division by 0
tangent = lambda gamma: normalize(FwdDiff(gamma))
normal = lambda gamma: -1j * tangent(gamma)

Let’s show the curve moved in the normal direction, \(\gamma_1(t) \pm \Delta n_{\gamma_1}(t)\).

delta = .03
dn = delta * normal(gamma1)
cplot(gamma1, 'k')
cplot(gamma1 + dn, 'r--')
cplot(gamma1 - dn, 'b--')
plt.show()

Now that we have defined our curve and implemented basic functions for manipulating and displaying our curves, let’s dive into the method.

Curve evolution

A curve evolution is a family of curves \(s\mapsto\gamma_s\) indexed by an evolution parameter \(s \geq 0\). The initial curve \(\gamma_0\) is evolved by minimizing the curve’s energy \(E(\gamma_s)\) using gradient descent, which corresponds to following the energy flow. \[ \frac{\mathrm{d}}{\mathrm{d}s} \gamma_s = -\nabla E(\gamma_s) \]

The numerical implementation of the method is formulated as \[ \gamma^{(k+1)} = \gamma^{(k)} - \tau_k\cdot \nabla E(\gamma^{(k)}) \] where \(\tau_k > 0\) is the step size.
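To see this update rule in isolation, here is a minimal sketch on a toy energy \(E(\gamma)=\sum_i \lvert\gamma_i-c\rvert^2\) (not an image energy), whose minimizer sends every point of the curve to \(c\):

```python
import numpy as np

# Toy illustration of gamma_{k+1} = gamma_k - tau * grad E(gamma_k).
# The energy E(gamma) = sum_i |gamma_i - c|^2 is illustrative only.
c = 0.3 + 0.4j
grad_E = lambda gamma: 2 * (gamma - c)   # gradient of the toy energy

gamma = np.array([1.0 + 1.0j, -1.0 + 0.5j, 0.0 - 2.0j])
tau = 0.1
for _ in range(200):
    gamma = gamma - tau * grad_E(gamma)  # gradient-descent step
```

After enough iterations every point converges to the minimizer \(c\); the real curve evolutions below replace `grad_E` with the gradients derived in this section.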

In order to define our energy, we consider a smooth Riemannian manifold equipped with an inner product on the tangent space at each point of the curve.

The inner product along the curve is defined by \[ \left\langle\mu{,}\nu\right\rangle_{\gamma} = \int_0^1 \left\langle\mu(t){,}\nu(t)\right\rangle \left\lVert\gamma'(t)\right\rVert \,\mathrm{d}t \]
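A straightforward discretization of this inner product on a uniform grid of \([0,1)\) can be sketched as follows (all names are illustrative); with \(\mu=\nu=1\) it reduces to the curve length:

```python
import numpy as np

# Discretize <mu, nu>_gamma = int <mu(t), nu(t)> ||gamma'(t)|| dt on p samples of [0,1).
p = 1000
t = np.linspace(0, 1, p, endpoint=False)
gamma = np.exp(2j * np.pi * t)           # unit circle, so ||gamma'|| = 2*pi

dot = lambda a, b: a.real * b.real + a.imag * b.imag  # Euclidean product in C ~ R^2

def inner_product(mu, nu, gamma, dt=1.0 / p):
    speed = np.abs(np.roll(gamma, -1) - gamma) / dt   # finite-difference ||gamma'||
    return np.sum(dot(mu, nu) * speed) * dt           # Riemann sum of the integral

# With mu = nu = 1 this is the curve length (2*pi for the unit circle).
length = inner_product(np.ones(p), np.ones(p), gamma)
```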

In this section we will consider intrinsic energy functions.

Intrinsic curve evolution

Intrinsic energy is defined along the normal and depends only on the curve itself. We express the curve evolution as a speed along the normal.

\[ \frac{\mathrm{d}}{\mathrm{d}s} \gamma_s(t) = \underbrace{\beta\left(\gamma_s(t),n_s(t),\kappa_s(t)\right)}_{\text{speed}} n_s(t) \]

where \(\kappa_{\gamma}(t)\) is the intrinsic curvature of \(\gamma(t)\) defined as \[ \kappa_{\gamma}(t) = \frac{1}{\left\lVert\gamma'(t)\right\rVert^2} \left\langle n'(t){,}\gamma'(t)\right\rangle \]

The speed term \(\beta(\gamma,n,\kappa)\) is defined by the method.
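As a sanity check of the curvature definition, a minimal sketch (re-declaring the finite-difference helpers of this section so it is self-contained) verifies that a circle of radius \(r\) has constant curvature \(1/r\):

```python
import numpy as np

# Finite-difference helpers, as defined earlier in this section.
FwdDiff = lambda c: np.roll(c, -1) - c
BwdDiff = lambda c: c - np.roll(c, +1)
normalize = lambda v: v / np.maximum(np.abs(v), 1e-10)
tangent = lambda gamma: normalize(FwdDiff(gamma))

# A circle of radius r sampled at p points; r and p are illustrative.
r, p = 2.0, 2000
theta = np.linspace(0, 2 * np.pi, p, endpoint=False)
gamma = r * np.exp(1j * theta)

# kappa * n is the derivative of the tangent divided by ||gamma'||,
# so its magnitude approximates the curvature |kappa| at each point.
kappa = np.abs(BwdDiff(tangent(gamma))) / np.abs(FwdDiff(gamma))
```

The computed `kappa` is constant and equal to \(1/r\), as expected for a circle.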

Mean curvature motion

Evolution by mean curvature is based on minimizing the curve length and consequently its curvature. It is in fact the simplest curve evolution method.

The energy is therefore simply the length of the curve \[ E(\gamma) = \int_0^1 \left\lVert\gamma'(t)\right\rVert \mathrm{d}t\]
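Discretely, this energy is just the total length of the closed polygon; a quick sketch (illustrative values) checks that it approaches \(2\pi r\) for a sampled circle:

```python
import numpy as np

# Discrete counterpart of E(gamma) = int ||gamma'(t)|| dt: the sum of the
# segment lengths of the closed polygon (last point linked back to the first).
curve_length = lambda gamma: np.sum(np.abs(np.roll(gamma, -1) - gamma))

r, p = 1.5, 4096
theta = np.linspace(0, 2 * np.pi, p, endpoint=False)
E = curve_length(r * np.exp(1j * theta))   # approaches 2*pi*r as p grows
```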

The energy gradient is therefore \[ \nabla E(\gamma_s)(t) = -\kappa_s(t) \cdot n_s(t) \]

In this method, the speed function is defined simply as \(\beta(\gamma,n,\kappa) = \kappa\).

To simplify the calculations, we define the function curve_step that computes a curve step along the normal: \(\mathrm{d}\gamma_s(t) = \kappa_s(t) \cdot n_s(t)\).

curve_step = lambda gamma: BwdDiff(tangent(gamma)) / np.abs(FwdDiff(gamma))

We perform this method on \(\gamma_1\).

# initialize method
dt = 0.001 / 100            # time step
Tmax = 3.0 / 100            # stop time
niter = round(Tmax / dt)    # number of iterations
nplot = 10                  # number of plots
plot_interval = round(niter / nplot)

gamma = gamma1.copy()       # initial curve (copy, so the in-place update does not modify gamma1)
plot_iter = 0               # plot iterator

for i in range(niter + 1):
    gamma += dt * curve_step(gamma)     # evolve curve
    gamma = resample(gamma, p=256)      # resample curve
    if i == plot_iter:
        lw = 4 if i in [0, niter] else 1
        cplot(gamma, 'r', lw)           # plot curve
        plot_iter += plot_interval      # increment plots
plt.show()

Geodesic motion

In a Riemannian manifold, the geodesic distance between two points is the length of the shortest path joining them, where length is measured as the weighted length. \[ L(\gamma) = \int_0^1 W(\gamma(t)) \left\lVert\gamma'(t)\right\rVert \mathrm{d}t\]

Where the weight \(W(\cdot)\) is the geodesic metric defined as the square root of the quadratic form associated to the geodesic metric tensor \(g(\cdot,\cdot)\). \[W(x) = \sqrt{g(x,x)} \geq 0\]
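The discrete counterpart is \(L(\gamma)\approx\sum_i W(\gamma_i)\,\lvert\gamma_{i+1}-\gamma_i\rvert\); a short sketch, using an illustrative analytic metric \(W\) rather than the synthetic one built below:

```python
import numpy as np

# An illustrative analytic metric; on the unit circle it is constant, W = 2.
W = lambda z: 1.0 + np.abs(z) ** 2

def weighted_length(gamma, W):
    seg = np.abs(np.roll(gamma, -1) - gamma)   # segment lengths (closed curve)
    return np.sum(W(gamma) * seg)              # sum of W-weighted segments

p = 4000
theta = np.linspace(0, 2 * np.pi, p, endpoint=False)
gamma = np.exp(1j * theta)                     # unit circle
L = weighted_length(gamma, W)                  # approaches 2 * (2*pi)
```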

Let’s implement a random synthetic weight \(W(x)\).

n = 200
nbumps = 40
theta = np.random.rand(nbumps)*2*np.pi
r = .6*n/2
a = np.array([.62*n,.6*n])
x = np.around(a[0] + r*np.cos(theta))
y = np.around(a[1] + r*np.sin(theta))
W = np.zeros([n,n])
for i in np.arange(0,nbumps):
    W[int(x[i]), int(y[i])] = 1
W = gaussian_blur(W, 6.0)
W = rescale(-np.minimum(W, .05), .3,1)
imageplot(W)
plt.show()

Calculate the gradient of the metric.

# calculate gradient
G = grad(W)
G = G[:,:,0] + 1j*G[:,:,1]

# display its magnitude
imageplot(np.abs(G))
plt.show()

Let’s define functions for evaluating \(W(\gamma(t))\) and \(\nabla W(\gamma(t))\).

EvalW = lambda W, gamma: bilinear_interpolate(W, gamma.imag, gamma.real)
EvalG = lambda G, gamma: bilinear_interpolate(G, gamma.imag, gamma.real)

In this part we will work with circular shapes; let’s define a few constants for convenient access.

PI = np.pi
TAU = 2 * PI
PI_2 = 0.5 * PI

Now let’s test the method by creating a circular curve.

r = .98 * n/2   # radius
p = 128         # number of curve segments
i_theta = np.linspace(0, TAU * 1j, p, endpoint=False)
im_center = (1 + 1j) * (n / 2)
gamma0 = im_center + r * np.exp(i_theta)
dotp = lambda a, b: a.real * b.real + a.imag * b.imag
def geodesic_active_contour(gamma, W, f, p, dt, Tmax, n_plot, periodic=True, endpts=None):
    # initialize iteration variables
    niter = round(Tmax / dt)
    plot_interval = round(niter / n_plot)
    plot_iter = 0
    gamma = gamma.copy()                        # do not modify the caller's curve

    # calculate the gradient of the metric
    G = grad(W)
    G = G[:,:,0] + 1j * G[:,:,1]

    imageplot(f.T)
    for i in range(niter + 1):
        N = normal(gamma)                       # calculate normal
        gamma_step = EvalW(W, gamma) * curve_step(gamma) - dotp(EvalG(G, gamma), N) * N
        gamma += dt * gamma_step                # evolve curve
        gamma = resample(gamma, p, periodic)    # resample curve
        if not periodic and endpts is not None:
            gamma[0], gamma[-1] = endpts        # impose start/end points
        if i == plot_iter:
            lw = 4 if i in [0, niter] else 1
            cplot(gamma, 'r', lw)               # plot curve
            plot_iter += plot_interval          # increment plots
            if not periodic and endpts is not None:
                _ = plt.plot(gamma[0].real,  gamma[0].imag, 'b.', markersize=20)
                _ = plt.plot(gamma[-1].real, gamma[-1].imag, 'b.', markersize=20)
    plt.show()

geodesic_active_contour(gamma0, W, W, p=128, dt=1, Tmax=5000, n_plot=10)


Medical Image Segmentation

We now apply the geodesic active contour to segment a medical image of a cortex, building the metric \(W\) from the image gradient.

n = 256
name = 'nt_toolbox/data/cortex.bmp'
f = load_image(name, n)
imageplot(f)
plt.show()

G = grad(f)
d0 = np.sqrt(np.sum(G**2, 2))
imageplot(d0)
plt.show()

a = 2
d = gaussian_blur(d0, a)
imageplot(d)
plt.show()

d = np.minimum(d, .4)
W = rescale(-d, .8, 1)
imageplot(W)
plt.show()

r = .95*n/2
p = 128 # number of points on the curve
theta = np.linspace(0, 2*np.pi, p, endpoint=False)
gamma0 = n/2*(1+1j) +  r*(np.cos(theta) + 1j*np.sin(theta))
gamma = gamma0
imageplot(np.transpose(f))
cplot(gamma, 'r', 2)
plt.show()

geodesic_active_contour(gamma0, W, f, p=128, dt=1, Tmax=9000, n_plot=10)

n = 256
f = load_image(name, n)
f = f[45:105, 60:120]
n = f.shape[0]
imageplot(f)
plt.show()

G = grad(f)
G = np.sqrt(np.sum(G**2,2))
sigma = 1.5
G = gaussian_blur(G,sigma)
G = np.minimum(G,.4)
W = rescale(-G,.4,1)
imageplot(W)
plt.show()

x0 = 4 + 55j
x1 = 53 + 4j
p = 128
t = np.linspace(0, 1, p).T
gamma0 = t*x1 + (1-t)*x0
gamma = gamma0
imageplot(W.T)
cplot(gamma,'r', 2)
_ = plt.plot(gamma[0].real, gamma[0].imag, 'b.', markersize=20)
_ = plt.plot(gamma[-1].real, gamma[-1].imag, 'b.', markersize=20)
plt.show()

dt = 1 / 10
Tmax = 2000*4/ 7
niter = round(Tmax/ dt)
G = grad(W)
G = G[:,:,0] + 1j*G[:,:,1]
gamma = gamma0
displist = np.around(np.linspace(0,niter,10))
k = 0
imageplot(f.T)
for i in range(0,niter+1):
    N = normal(gamma)
    g = EvalW(W, gamma) * curve_step(gamma) - dotp(EvalG(G, gamma), N) * N
    gamma = gamma + dt*g
    gamma = resample(gamma, p=128, periodic=False)
    # impose start/end point
    gamma[0] = x0
    gamma[-1] = x1
    if i==displist[k]:   
        lw = 1
        if i==0 or i==niter:
            lw = 4
        cplot(gamma, 'r', lw);
        k = k+1
        _ = plt.plot(gamma[0].real,  gamma[0].imag, 'b.', markersize=20)
        _ = plt.plot(gamma[-1].real, gamma[-1].imag, 'b.', markersize=20)
plt.show()
#geodesic_active_contour(gamma0, W, f, p=128, dt=0.1, Tmax=8000/7, n_plot=10,
#    periodic=False, endpts=(x0, x1))

Level-Set Method

The Level-Set Method (LSM) allows manipulating hypersurfaces in different contexts without requiring their parametric representation.

LSM can be used to track changing topologies, as well as for contour detection in image processing.

Basic terminology

  • Level-set function: a function \(\phi:\mathbb{R}^2\to\mathbb{R}\) whose zero level set \(\{x \mid \phi(x)=0\}\) implicitly represents the contour.
  • Sign convention: \(\phi\) is negative inside the contour and positive outside; it is often chosen as the signed distance to the contour.
  • Redistancing: periodically resetting \(\phi\) to a signed distance function to keep the evolution numerically stable.
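A minimal sketch of these objects (grid size and radius are illustrative): a signed distance \(\phi\) for a circle, negative inside and positive outside, with the contour as the zero level set.

```python
import numpy as np

# Signed distance to a circle on an n x n grid; all constants are illustrative.
n = 100
Y, X = np.meshgrid(np.arange(n), np.arange(n))
cx, cy, r = n / 2, n / 2, n / 4
phi = np.sqrt((X - cx) ** 2 + (Y - cy) ** 2) - r

inside = phi[int(cx), int(cy)]   # at the center: phi < 0
outside = phi[0, 0]              # far from the circle: phi > 0
```

The contour \(\{x \mid \phi(x)=0\}\) is exactly the circle of radius `r` around `(cx, cy)`.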

Exercise 1

n = 200
Y, X = np.meshgrid(np.arange(1,n+1), np.arange(1,n+1))
r = n / 3

c = np.array([r,r]) + 10
phi1 = np.sqrt((X-c[0])**2 + (Y-c[1])**2) - r

c = n - c
phi2 = np.maximum(abs(X-c[0]), abs(Y-c[1])) - r

from nt_toolbox.plot_levelset import *
plt.figure()
plt.subplot(121)
plot_levelset(phi1)
plt.subplot(122)
plot_levelset(phi2)
plt.show()

Exercise 2

plt.figure(figsize = (10,5))
phi0 = np.minimum(phi1, phi2)

plt.subplot(1,2,1)
plot_levelset(phi0)
plt.title("Union")
plt.subplot(1,2,2)
plot_levelset(np.maximum(phi1, phi2))
plt.title("Intersection")
plt.show()

Exercise 3

from nt_toolbox.grad import *
from nt_toolbox.div import *

Tmax = 200
tau = .5
niter = int(Tmax/tau)


plt.figure(figsize=(10,10))
phi = np.copy(phi0) # initialization
eps = np.finfo(float).eps
k = 0

for i in range(1,niter+1):
    g0 = grad(phi, order=2)
    d = np.maximum(eps*np.ones([n,n]), np.sqrt(np.sum(g0**2, 2)))
    g = g0/np.repeat(d[:,:,np.newaxis], 2, 2)
    K = d*div(g[:,:,0], g[:,:,1], order=2)
    phi = phi + tau*K
    if i % int(niter/4.) == 0:
        k = k + 1
        plt.subplot(2, 2, k)
        plot_levelset(phi)
plt.show()

Exercise 4

# cause of the problem: phi0**3 has the same zero level set, but a degenerate gradient
phi = phi0**3
from nt_toolbox.perform_redistancing import *
phi1 = perform_redistancing(phi)

plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plot_levelset(phi)
plt.title("Before redistancing")
plt.subplot(1,2,2)
plot_levelset(phi1)
plt.title("After redistancing")
plt.show()

n = 200
f0 = rescale(load_image("nt_toolbox/data/cortex.bmp", n))
g = grad(f0, order=2)
d0 = np.sqrt(np.sum(g**2, 2))
a = 5
from nt_toolbox.perform_blurring import *
d = perform_blurring(d0, np.asarray([a]), bound="per")
W = rescale(-d, 0.1, 1)

plt.figure(figsize=(10,5))
imageplot(f0, "Image to segment", [1,2,1])
imageplot(W, "Weight", [1,2,2])
plt.show()

Y,X = np.meshgrid(np.arange(1,n+1), np.arange(1,n+1))
r = n/3
c = np.asarray([n,n])/2
phi0 = np.maximum(abs(X-c[0]), abs(Y-c[1])) - r
plt.figure(figsize=(5,5))
plot_levelset(phi0, 0, f0)
plt.show()

Exercise 5

tau = .4
Tmax = 1500
niter = int(Tmax/tau)
phi = np.copy(phi0)
eps = np.finfo(float).eps
gW = grad(W, order=2)

# descent direction: G = W*|grad phi|*div(grad phi/|grad phi|) + <grad W, grad phi>
gD = grad(phi, order=2)
d = np.maximum(eps*np.ones([n,n]), np.sqrt(np.sum(gD**2, 2)))
g = gD/np.repeat(d[:,:,np.newaxis], 2, 2)
G = W*d*div(g[:,:,0], g[:,:,1], order=2) + np.sum(gW*gD, 2)

Exercise 6

plt.figure(figsize=(10,10))
phi = np.copy(phi0)
k = 0
gW = grad(W, order=2)

for i in range(1,niter+1):
    gD = grad(phi, order=2)
    d = np.maximum(eps*np.ones([n,n]), np.sqrt(np.sum(gD**2, 2)))
    g = gD/np.repeat(d[:,:,np.newaxis], 2, 2)
    G = W*d*div(g[:,:,0], g[:,:,1], order=2) + np.sum(gW*gD,2)
    phi = phi + tau*G
    if i % 30 == 0:
        phi = perform_redistancing(phi)
    if i % int(niter/4.) == 0:
        k = k + 1
        plt.subplot(2, 2, k)
        plot_levelset(phi,0,f0)
plt.show()

Exercise 7

plt.figure(figsize=(10,5))
Y,X = np.meshgrid(np.arange(1,n+1), np.arange(1,n+1))
k = 4 # number of circles
r = .3*n/k
phi0 = np.full([n,n], np.inf)

for i in range(1,k+1):
    for j in range(1,k+1):
        c = (np.asarray([i,j]) - 1)*(n/k) + (n/k)*.5
        phi0 = np.minimum(phi0, np.sqrt(abs(X-c[0])**2 + abs(Y-c[1])**2) - r)

plt.subplot(1,2,1)
plot_levelset(phi0, 0)
plt.subplot(1,2,2)
plot_levelset(phi0, 0, f0)
plt.show()

Exercise 8

lambd = 2
c1 = .7
c2 = 0
tau = .5

Tmax = 100
niter = int(Tmax/ tau)
phi = np.copy(phi0)

gD = grad(phi, order=2)
d = np.maximum(eps*np.ones([n,n]), np.sqrt(np.sum(gD**2, 2)))
g = gD/np.repeat(d[:,:,np.newaxis], 2, 2)
G = d*div(g[:,:,0], g[:,:,1], order=2) - lambd*(f0-c1)**2 + lambd*(f0-c2)**2

Exercise 9

plt.figure(figsize=(10,10))
phi = np.copy(phi0)
k = 0

for i in range(1,niter+1):
    gD = grad(phi, order=2)
    d = np.maximum(eps*np.ones([n,n]), np.sqrt(np.sum(gD**2, 2)))
    g = gD/np.repeat(d[:,:,np.newaxis], 2, 2)
    G = d*div(g[:,:,0], g[:,:,1], order=2) - lambd*(f0-c1)**2 + lambd*(f0-c2)**2
    phi = phi + tau*G
    if i % 30 == 0:
        phi = perform_redistancing(phi)
    if i % int(niter/4.) == 0:
        k = k + 1
        plt.subplot(2, 2, k)
        plot_levelset(phi,0,f0)
plt.show()

Conclusion

References

Peyré, Gabriel. 2011. “The Numerical Tours of Signal Processing - Advanced Computational Signal and Image Processing.” IEEE Computing in Science and Engineering 13 (4): 94–97. https://hal.archives-ouvertes.fr/hal-00519521.

Weisstein, Eric W. n.d. “Closed Curve.” MathWorld. Accessed February 23, 2020. http://mathworld.wolfram.com/ClosedCurve.html.